LowLevelParticleFilters


Types

We provide a number of filter types

  • ParticleFilter: This filter is simple to use and assumes that both dynamics noise and measurement noise are additive.
  • AuxiliaryParticleFilter: This filter is identical to ParticleFilter, but uses a slightly different proposal mechanism for new particles.
  • AdvancedParticleFilter: This filter gives you more flexibility, at the expense of having to define a few more functions. More instructions on this type below.
  • KalmanFilter: Is what you would expect. Has the same features as the particle filters, but is restricted to linear dynamics and Gaussian noise.
  • UnscentedKalmanFilter: Is also what you would expect. Has almost the same features as the Kalman filter, but handles nonlinear dynamics and measurement models; it still requires an additive Gaussian noise model.
  • ExtendedKalmanFilter: Runs a regular Kalman filter on linearized dynamics. Uses ForwardDiff.jl for linearization.

Functionality

  • Filtering
  • Smoothing (smooth)
  • Parameter estimation using ML or PMMH (Particle Marginal Metropolis Hastings)

Particle filter

Defining a particle filter is straightforward: one must define the distribution of the noise df in the dynamics function dynamics(x,u,t), and the noise distribution dg in the measurement function measurement(x,u,t). The distribution of the initial state d0 must also be provided. An example for a linear Gaussian system is given below.

using LowLevelParticleFilters, LinearAlgebra, StaticArrays, Distributions, Plots

Define problem

n = 2   # Dimension of state
m = 2   # Dimension of input
p = 2   # Dimension of measurements
N = 500 # Number of particles

const dg = MvNormal(p,1.0)          # Measurement noise Distribution
const df = MvNormal(n,1.0)          # Dynamics noise Distribution
const d0 = MvNormal(randn(n),2.0)   # Initial state Distribution
IsoNormal(
dim: 2
μ: [0.7923704311571288, 0.053478596528521966]
Σ: [4.0 0.0; 0.0 4.0]
)

Define random linear state-space system

Tr = randn(n,n)
const A = SMatrix{n,n}(Tr*diagm(0=>LinRange(0.5,0.95,n))/Tr)
const B = @SMatrix randn(n,m)
const C = @SMatrix randn(p,n)
2×2 StaticArrays.SMatrix{2, 2, Float64, 4} with indices SOneTo(2)×SOneTo(2):
  0.244706  -0.239882
 -0.68271   -0.106697

Next, we define the dynamics and measurement equations; they both take the signature (x,u,t) = (state, input, time)

dynamics(x,u,t) = A*x .+ B*u
measurement(x,u,t) = C*x
vecvec_to_mat(x) = copy(reduce(hcat, x)') # Helper function
vecvec_to_mat (generic function with 1 method)
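The helper simply stacks a vector of vectors into a matrix with one row per element. A minimal standalone illustration (the data here is made up for the example):

```julia
vecvec_to_mat(x) = copy(reduce(hcat, x)') # Stack the vectors as columns, then transpose

xs = [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]] # E.g., three time steps of a 2-dimensional state
M  = vecvec_to_mat(xs)                    # 3×2 Matrix: one row per time step
```

This layout (T × n_states) is what the plotting calls below expect.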

We are now ready to define and use a filter

pf = ParticleFilter(N, dynamics, measurement, df, dg, d0)
ParticleFilter{PFstate{StaticArrays.SVector{2, Float64}, Float64}, typeof(Main.ex-lingauss.dynamics), typeof(Main.ex-lingauss.measurement), Distributions.ZeroMeanIsoNormal{Tuple{Base.OneTo{Int64}}}, Distributions.ZeroMeanIsoNormal{Tuple{Base.OneTo{Int64}}}, Distributions.IsoNormal, DataType, Random.MersenneTwister}
  state: PFstate{StaticArrays.SVector{2, Float64}, Float64}
  dynamics: dynamics (function of type typeof(Main.ex-lingauss.dynamics))
  measurement: measurement (function of type typeof(Main.ex-lingauss.measurement))
  dynamics_density: Distributions.ZeroMeanIsoNormal{Tuple{Base.OneTo{Int64}}}
  measurement_density: Distributions.ZeroMeanIsoNormal{Tuple{Base.OneTo{Int64}}}
  initial_density: Distributions.IsoNormal
  resample_threshold: Float64 0.1
  resampling_strategy: LowLevelParticleFilters.ResampleSystematic <: LowLevelParticleFilters.ResamplingStrategy
  rng: Random.MersenneTwister

With the filter in hand, we can simulate from its dynamics and query some properties

xs,u,y = simulate(pf,200,df) # We can simulate the model that the pf represents
pf(u[1], y[1])               # Perform one filtering step using input u and measurement y
particles(pf)                # Query the filter for particles, try weights(pf) or expweights(pf) as well
x̂ = weighted_mean(pf)        # Calculate the weighted mean of the current particles
2-element Vector{Float64}:
 -0.5065211725495822
 -2.017746113733322

If you want to perform filtering using vectors of inputs and measurements, try any of the functions

x,w,we,ll = forward_trajectory(pf, u, y) # Filter whole vectors of signals
x̂,ll = mean_trajectory(pf, u, y)
trajectorydensity(pf,x,w,u,y,xreal=xs)

If MonteCarloMeasurements.jl is loaded, you may transform the output particles to Matrix{MonteCarloMeasurements.Particles} with the layout T × n_states using Particles(x,we). Internally, the particles are then resampled such that they all have unit weight. This is convenient for making use of the plotting facilities of MonteCarloMeasurements.jl.

For a full usage example, see the benchmark section below or example_lineargaussian.jl

Smoothing

We also provide a particle smoother, based on forward filtering, backward simulation (FFBS)

N     = 2000 # Number of particles
T     = 200  # Number of time steps
M     = 100  # Number of smoothed backwards trajectories
pf    = ParticleFilter(N, dynamics, measurement, df, dg, d0)
du    = MvNormal(2,1)     # Control input distribution
x,u,y = simulate(pf,T,du) # Simulate trajectory using the model in the filter
tosvec(y) = reinterpret(SVector{length(y[1]),Float64}, reduce(hcat,y))[:] |> copy
x,u,y = tosvec.((x,u,y))

xb,ll = smooth(pf, M, u, y) # Sample smoothing particles
xbm   = smoothed_mean(xb)   # Calculate the mean of smoothing trajectories
xbc   = smoothed_cov(xb)    # And covariance
xbt   = smoothed_trajs(xb)  # Get smoothing trajectories
xbs   = [diag(xbc) for xbc in xbc] |> vecvec_to_mat .|> sqrt
plot(xbm', ribbon=2xbs, lab="PF smooth")
plot!(vecvec_to_mat(x), l=:dash, lab="True")

We can plot the particles themselves as well

plot(vecvec_to_mat(x), l=(4,), layout=(2,1), show=false)
scatter!(xbt[1,:,:]', subplot=1, show=false, m=(1,:black, 0.5), lab="")
scatter!(xbt[2,:,:]', subplot=2, m=(1,:black, 0.5), lab="")

Kalman filter

A Kalman filter is easily created using the constructor. Many of the functions defined for particle filters are also defined for Kalman filters, e.g.:

eye(n) = Matrix{Float64}(I,n,n)
kf     = KalmanFilter(A, B, C, 0, eye(n), eye(p), MvNormal([1.,1.]))
ukf    = UnscentedKalmanFilter(dynamics, measurement, eye(n), eye(p), MvNormal([1.,1.]))
xf,xt,R,Rt,ll = forward_trajectory(kf, u, y) # filtered, prediction, pred cov, filter cov, loglik
xT,R,lls = smooth(kf, u, y) # Smoothed state, smoothed cov, loglik
([[0.5224996486976393, 0.4114498091050312], [-0.9219899054800378, -1.7793866020697617], [-1.7122729716159157, -2.441200912676937], [-2.5786067325657913, -2.873212333340461], [-5.202232262221678, -5.045118573670819], [-7.454817605966369, -5.446661829617769], [-9.419882251629486, -6.7868986283101815], [-9.846517705873024, -6.830564262007871], [-12.50160253693361, -8.851477692373482], [-13.658993177561426, -8.61917101511456]  …  [-2.2794694885568068, -2.375424133195196], [-2.951658311411961, -3.0987769609116937], [-4.791188260510735, -3.877505875071652], [-7.626489249820872, -5.704045772319542], [-7.2669718026994765, -4.025079574502221], [-7.664966806725667, -4.2438924452352085], [-9.680084968359774, -6.866058538738943], [-10.573097929376845, -6.907599914637785], [-11.700265718249835, -7.236249280245263], [-10.330681171388402, -5.696913679811797]], [[0.5425382778012917 -0.05910781113967281; -0.05910781113967281 0.8849277269538998], [0.6504619129370122 -0.0038106534186991003; -0.0038106534186991003 1.140660290462519], [0.678290768329716 0.026130783342420527; 0.026130783342420527 1.217590565045441], [0.6867433734254172 0.03786915116541001; 0.03786915116541001 1.2413336483376853], [0.6895079197218145 0.04203702166592227; 0.04203702166592227 1.2487972896679607], [0.6904329418590438 0.043457708849407445; 0.043457708849407445 1.2511737237683713], [0.6907430631746023 0.043932754953417974; 0.043932754953417974 1.2519369212220994], [0.6908465721984296 0.04409003787289423; 0.04409003787289423 1.2521834173647672], [0.6908809341626267 0.04414183067905886; 0.04414183067905886 1.2522633266575518], [0.6908922884611246 0.044158832369101064; 0.044158832369101064 1.2522892949957758]  …  [0.6909044604299592 0.0441787692284773; 0.0441787692284773 1.2523224061901952], [0.6909164201122477 0.04419966544950066; 0.04419966544950066 1.2523589305822251], [0.6909503729874045 0.04425824250950572; 0.04425824250950572 1.2524600933104637], [0.6910480337178172 0.044423223287551605; 
0.044423223287551605 1.2527395431283344], [0.6913350016955182 0.044891253489030475; 0.044891253489030475 1.2535081364324865], [0.6922077974369298 0.04623310061371849; 0.04623310061371849 1.255607055871681], [0.6950107251882331 0.05013541669171251; 0.05013541669171251 1.2612726368547686], [0.7047801755787559 0.061670419380220054; 0.061670419380220054 1.2762797969698925], [0.7429086177524422 0.09615218030035153; 0.09615218030035153 1.3148534706781125], [0.9135442804898514 0.19770712884769426; 0.19770712884769426 1.4096173850369904]], -626.2470549778681)

It can also be called in a loop like the pf above

ll = 0.0
for t = 1:T
    kf(u[t], y[t]) # Performs both correct! and predict!
    # alternatively
    ll += correct!(kf, y[t], t) # Returns loglik
    x   = state(kf)
    R   = covariance(kf)
    predict!(kf, u[t], t)
end
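For reference, correct! and predict! implement the standard Kalman-filter measurement and time updates. In the notation of the constructor above (R1 is the dynamics noise covariance, R2 the measurement noise covariance), the textbook equations are as follows; this is a sketch for intuition, not necessarily the package's exact internals:

```math
\begin{aligned}
\text{correct!:}\quad & K = RC^T(CRC^T + R_2)^{-1}, \qquad x \leftarrow x + K(y - Cx), \qquad R \leftarrow (I - KC)R \\
\text{predict!:}\quad & x \leftarrow Ax + Bu, \qquad R \leftarrow ARA^T + R_1
\end{aligned}
```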

Unscented Kalman Filter

The UKF takes the same arguments as a regular KalmanFilter, but the matrices defining the dynamics are replaced by two functions, dynamics and measurement, working in the same way as for the ParticleFilter above.

ukf    = UnscentedKalmanFilter(dynamics, measurement, eye(n), eye(p), MvNormal([1.,1.]))
UnscentedKalmanFilter{typeof(Main.ex-lingauss.dynamics), typeof(Main.ex-lingauss.measurement), StaticArrays.SMatrix{2, 2, Float64, 4}, StaticArrays.SMatrix{2, 2, Float64, 4}, Distributions.ZeroMeanFullNormal{Tuple{Base.OneTo{Int64}}}, Distributions.ZeroMeanDiagNormal{Tuple{Base.OneTo{Int64}}}, StaticArrays.SVector{2, Float64}, Vector{Float64}, Matrix{Float64}}
  dynamics: dynamics (function of type typeof(Main.ex-lingauss.dynamics))
  measurement: measurement (function of type typeof(Main.ex-lingauss.measurement))
  R1: StaticArrays.SMatrix{2, 2, Float64, 4}
  R2: StaticArrays.SMatrix{2, 2, Float64, 4}
  R2d: Distributions.ZeroMeanFullNormal{Tuple{Base.OneTo{Int64}}}
  d0: Distributions.ZeroMeanDiagNormal{Tuple{Base.OneTo{Int64}}}
  xs: Array{StaticArrays.SVector{2, Float64}}((5,))
  x: Array{Float64}((2,)) [0.0, 0.0]
  R: Array{Float64}((2, 2)) [1.0 0.0; 0.0 1.0]
  t: Base.RefValue{Int64}

Troubleshooting

Tuning a particle filter can be quite the challenge. To assist with this, we provide some visualization tools

debugplot(pf,u[1:30],y[1:30], runall=true, xreal=x[1:30])
Time     Surviving    Effective nbr of particles
--------------------------------------------------------------
t:     1   1.000    2000.0
t:     2   1.000    1289.9
t:     3   1.000    1044.7
t:     4   1.000     767.1
t:     5   1.000     370.4
t:     6   1.000     279.7
t:     7   0.248    2000.0
t:     8   1.000    1302.9
t:     9   1.000     493.6
t:    10   1.000     461.0
t:    11   1.000     354.6
t:    12   0.297    2000.0
t:    13   1.000    1736.9
t:    14   1.000    1402.1
t:    15   1.000     715.1
t:    16   1.000     598.1
t:    17   0.264    2000.0
t:    18   0.262    2000.0
t:    19   1.000    1015.3
t:    20   1.000     742.7
t:    21   0.249    2000.0
t:    22   1.000     887.2
t:    23   1.000     785.1
t:    24   1.000     368.6
t:    25   1.000     288.8
t:    26   1.000     236.7
t:    27   0.278    2000.0
t:    28   1.000    1244.2
t:    29   1.000    1108.9
t:    30   1.000     731.1
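The effective number of particles shown in the rightmost column is a standard degeneracy diagnostic; with normalized weights it is commonly computed as (textbook formula, shown for intuition):

```math
N_{\mathrm{eff}} = \frac{1}{\sum_{i=1}^{N} w_i^2}
```

It equals N when all weights are uniform and approaches 1 when a single particle carries all the weight; resampling is triggered when N_eff/N falls below resample_threshold.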

The plot displays all states and all measurements. The heatmap in the background represents the weighted particle distribution per time step. For the measurement sequences, the heatmap represents the distribution of predicted measurements. The blue dots correspond to measured values. In this case we simulated the data, so we had access to the true states as well; if you do not have that, just omit xreal. You can also manually step through the time series using

  • commandplot(pf,u,y; kwargs...)

For options to the debug plots, see ?pplot.

Parameter estimation

We provide some basic functionality for maximum-likelihood estimation and MAP estimation

ML estimation

Plot likelihood as function of the variance of the dynamics noise

svec = exp10.(LinRange(-1.5,1.5,60))
llspf = map(svec) do s
    df = MvNormal(n,s)
    pfs = ParticleFilter(2000, dynamics, measurement, df, dg, d0)
    loglik(pfs,u,y)
end
plot( svec, llspf,
    xscale = :log10,
    title = "Log-likelihood",
    xlabel = "Dynamics noise standard deviation",
    lab = "PF",
)
vline!([svec[findmax(llspf)[2]]], l=(:dash,:blue), primary=false)

We can do the same with a Kalman filter

eye(n) = Matrix{Float64}(I,n,n)
llskf = map(svec) do s
    kfs = KalmanFilter(A, B, C, 0, s^2*eye(n), eye(p), d0)
    loglik(kfs,u,y)
end
plot!(svec, llskf, yscale=:identity, xscale=:log10, lab="Kalman", c=:red)
vline!([svec[findmax(llskf)[2]]], l=(:dash,:red), primary=false)

As we can see, the result is quite noisy due to the stochastic nature of particle filtering.
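The noise stems from the fact that loglik for a particle filter is a Monte-Carlo estimate of the log-likelihood; the standard estimator sums, over time, the log of the average unnormalized weights (textbook formula, shown for intuition):

```math
\log \hat{p}(y_{1:T} \mid \theta) = \sum_{t=1}^{T} \log\left( \frac{1}{N} \sum_{i=1}^{N} \tilde{w}_t^i \right)
```

The Kalman filter, in contrast, computes the likelihood exactly for linear Gaussian models, which is why its curve is smooth.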

Smoothing using KF

kf = KalmanFilter(A, B, C, 0, eye(n), eye(p), MvNormal(2,1))
xf,xh,R,Rt,ll = forward_trajectory(kf, u, y) # filtered, prediction, pred cov, filter cov, loglik
xT,R,lls = smooth(kf, u, y) # Smoothed state, smoothed cov, loglik

Plot and compare PF and KF

plot(vecvec_to_mat(xT), lab="Kalman smooth", layout=2)
plot!(xbm', lab="pf smooth")
plot!(vecvec_to_mat(x), lab="true")

MAP estimation

To solve a MAP estimation problem, we need to define a function that takes a parameter vector and returns a particle filter

filter_from_parameters(θ, pf = nothing) = ParticleFilter(
    N,
    dynamics,
    measurement,
    MvNormal(n, exp(θ[1])),
    MvNormal(p, exp(θ[2])),
    d0,
)
filter_from_parameters (generic function with 2 methods)

The call to exp on the parameters is so that we can define log-normal priors

priors = [Normal(0,2),Normal(0,2)]
2-element Vector{Distributions.Normal{Float64}}:
 Distributions.Normal{Float64}(μ=0.0, σ=2.0)
 Distributions.Normal{Float64}(μ=0.0, σ=2.0)
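Concretely, placing a normal prior on the log of a standard deviation is equivalent to a log-normal prior on the standard deviation itself:

```math
\theta_i = \log \sigma_i, \qquad \theta_i \sim \mathcal{N}(0,\, 2^2) \;\Longleftrightarrow\; \sigma_i \sim \mathrm{LogNormal}(0,\, 2^2)
```

This guarantees that the noise standard deviations remain positive during the optimization, no matter what value θ takes.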

Now we call the function log_likelihood_fun that returns a function to be minimized

ll = log_likelihood_fun(filter_from_parameters,priors,u,y)
#83 (generic function with 1 method)

Since this is a low-dimensional problem, we can plot the LL on a 2d-grid

function meshgrid(a,b)
    grid_a = [i for i in a, j in b]
    grid_b = [j for i in a, j in b]
    grid_a, grid_b
end
Nv       = 20
v        = LinRange(-0.7,1,Nv)
llxy     = (x,y) -> ll([x;y])
VGx, VGy = meshgrid(v,v)
VGz      = llxy.(VGx, VGy)
heatmap(
    VGz,
    xticks = (1:Nv, round.(v, digits = 2)),
    yticks = (1:Nv, round.(v, digits = 2)),
    xlabel = "sigma v",
    ylabel = "sigma w",
) # Yes, labels are reversed

Something seems to be off with this figure, as the hottest spot is not where we would expect it to be.

Optimization of the log likelihood can be done by, e.g., global/black box methods, see BlackBoxOptim.jl. Standard tricks apply, such as performing the parameter search in log-space etc.

Bayesian inference using PMMH

This is pretty cool. We proceed like we did for MAP above, but when calling the function metropolis, we will get the entire posterior distribution of the parameter vector, for the small cost of a massive increase in computational cost.

N = 1000
filter_from_parameters(θ, pf = nothing) = AuxiliaryParticleFilter(
    N,
    dynamics,
    measurement,
    MvNormal(n, exp(θ[1])),
    MvNormal(p, exp(θ[2])),
    d0,
)
filter_from_parameters (generic function with 2 methods)

The call to exp on the parameters is so that we can define log-normal priors

priors = [Normal(0,2),Normal(0,2)]
ll     = log_likelihood_fun(filter_from_parameters,priors,u,y)
θ₀     = log.([1.,1.]) # Starting point
2-element Vector{Float64}:
 0.0
 0.0

We also need to define a function that suggests a new point from the "proposal distribution". This can be pretty much anything, but it has to be symmetric since I was lazy and simplified an equation.
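With a symmetric proposal, q(θ′ | θ) = q(θ | θ′), the proposal densities cancel in the Metropolis–Hastings acceptance ratio, leaving the simplified Metropolis form (standard textbook expression, with p(θ) the prior and the likelihood replaced by the particle-filter estimate):

```math
\alpha = \min\left(1,\; \frac{p(\theta')\, \hat{p}(y_{1:T} \mid \theta')}{p(\theta)\, \hat{p}(y_{1:T} \mid \theta)}\right)
```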

draw   = θ -> θ .+ rand(MvNormal(0.05ones(2)))
burnin = 200
@info "Starting Metropolis algorithm"
@time theta, lls = metropolis(ll, 1200, θ₀, draw) # Run PMMH for 1200  iterations, takes about half a minute on my laptop
thetam = reduce(hcat, theta)'[burnin+1:end,:] # Build a matrix of the output (was vecofvec)
histogram(exp.(thetam), layout=(3,1)); plot!(lls[burnin+1:end], subplot=3) # Visualize

If you are lucky, you can run the above threaded as well. I tried my best to make particle filters thread safe with their own rngs etc., but your mileage may vary.

@time thetalls = LowLevelParticleFilters.metropolis_threaded(burnin, ll, 500, θ₀, draw)
histogram(exp.(thetalls[:,1:2]), layout=3)
plot!(thetalls[:,3], subplot=3)

AdvancedParticleFilter

The AdvancedParticleFilter type requires you to implement the same functions as the regular ParticleFilter, but in this case you also need to handle sampling from the noise distributions yourself. The function dynamics must provide a method that accepts the state vector, control vector, time, and noise::Bool, where noise indicates whether or not to add noise to the state. If noise should be added, this should be done inside dynamics. An example is given below.

using Random
const rng = Random.MersenneTwister()
function dynamics(x,u,t,noise=false) # It's important that this defaults to false
    x = A*x .+ B*u # A simple dynamics model
    if noise
        x += rand(rng, df) # it's faster to supply your own rng
    end
    x
end
dynamics (generic function with 2 methods)

The measurement_likelihood function must have a method accepting state, input, measurement and time, and returning the log-likelihood of the measurement given the state. A simple example below:

function measurement_likelihood(x,u,y,t)
    logpdf(dg, C*x-y) # A simple linear measurement model with normal additive noise
end
measurement_likelihood (generic function with 1 method)

This gives you very high flexibility. The noise model in either function can, for instance, be a function of the state, something that is not possible for the simple ParticleFilter. To be able to simulate the AdvancedParticleFilter like we did with the simple filter above, a measurement method with the signature measurement(x,u,t,noise=false) must be available and return a sample measurement given the state (and possibly time). For our example measurement model above, it would look like this:

measurement(x,u,t,noise=false) = C*x + noise*rand(rng, dg)
measurement (generic function with 2 methods)

We now create the AdvancedParticleFilter and use it in the same way as the other filters:

apf = AdvancedParticleFilter(N, dynamics, measurement, measurement_likelihood, df, d0)
x,w,we,ll = forward_trajectory(apf, u, y)
(StaticArrays.SVector{2, Float64}[[0.8815825714668946, -1.7234918748222905] [-0.33555226141033634, -3.0812230574090456] … [-12.018808804012982, -10.634289377917938] [-13.034856471892988, -9.065595387248793]; [1.8070108303474783, -0.38030672404646526] [0.5888111378608156, -2.5707752073288197] … [-12.030137441489044, -8.389011635663849] [-10.21802326095474, -7.688332658331285]; … ; [2.3160784496058, 0.28054683526269436] [0.41785838288747457, -0.9300552476319763] … [-12.223233404351468, -8.656620833779538] [-11.773466437077161, -6.06372510688558]; [0.13679867256055933, 0.6185221640706856] [-1.128053754946873, -0.26131432795779963] … [-13.361991543710978, -8.682065408259467] [-13.2144588788736, -6.937961133405412]], [-6.834347670915963 -6.6420776786128926 … -7.992241976357514 -10.634658257929189; -6.645593432891565 -7.365484441512548 … -6.792433910358003 -6.328229956328662; … ; -6.817183802344226 -7.673217492469664 … -6.935390560267928 -7.426618109147918; -6.316273833788558 -6.273060472595255 … -7.453616123630505 -10.079035182667777], [0.00107616910273399 0.0013043144803865606 … 0.00033807527629100637 2.4067257337407926e-5; 0.0012997368827407875 0.0006327188169669159 … 0.001122234030433536 0.0017851908399489648; … ; 0.001094799757450945 0.0004651188970548302 … 0.000972743079717386 0.0005951970033526004; 0.0018066629059315558 0.0018864463222259984 … 0.0005793428444290155 4.194987173317474e-5], -3243.186552975877)

trajectorydensity(apf, x, we, u, y, xreal=xs)

We can even use this type as an AuxiliaryParticleFilter

apfa = AuxiliaryParticleFilter(apf)
x,w,we,ll = forward_trajectory(apfa, u, y)
trajectorydensity(apfa, x, we, u, y, xreal=xs)
dimensiondensity(apfa, x, we, u, y, 1, xreal=xs) # Same as above, but only plots a single dimension